
    A LINEAR-TIME ALGORITHM FOR BROADCAST DOMINATION IN A TREE

    The broadcast domination problem is a variant of the classical minimum dominating set problem in which a transmitter of power p at vertex v is capable of dominating all vertices within distance p of v. Our goal is to assign a broadcast power f(v) to every vertex v in a graph such that the sum of f(v) over all v in V is minimized, and such that every vertex u with f(u) = 0 is within distance f(v) of some vertex v with f(v) > 0. The problem is solvable in polynomial time on a general graph, and Blair et al. gave an O(n^2) algorithm for trees. We provide an O(n) algorithm for trees. Our algorithm is notable because it makes decisions for each vertex v based on 'non-local' information from vertices far away from v, whereas almost all other linear-time algorithms for trees make use of local information only.
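    The domination condition defined above can be checked directly. Below is a minimal verifier sketch (not the authors' linear-time algorithm), assuming the tree is given as an adjacency dict and the power assignment as a dict; it runs a BFS of radius f(v) from each transmitter:

    ```python
    from collections import deque

    def is_broadcast_dominated(adj, f):
        """Check whether the power assignment f dominates every vertex.

        adj: dict mapping vertex -> list of neighbours (a tree).
        f:   dict mapping vertex -> assigned broadcast power (>= 0).
        A vertex u with f[u] == 0 must lie within distance f[v] of
        some vertex v with f[v] > 0.
        """
        covered = {v for v in adj if f[v] > 0}  # transmitters cover themselves
        for v, power in f.items():
            if power == 0:
                continue
            # BFS outward from transmitter v, stopping at distance f[v]
            dist = {v: 0}
            queue = deque([v])
            while queue:
                u = queue.popleft()
                if dist[u] == power:
                    continue
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        covered.add(w)
                        queue.append(w)
        return covered == set(adj)

    # Path a-b-c-d-e: a single transmitter of power 2 at the centre
    # dominates all five vertices, at total cost sum(f.values()) == 2.
    adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'],
           'd': ['c', 'e'], 'e': ['d']}
    f = {'a': 0, 'b': 0, 'c': 2, 'd': 0, 'e': 0}
    ```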

    Batch Testing, Adaptive Algorithms, and Heuristic Applications for Stable Marriage Problems

    In this dissertation we focus on different variations of the stable matching (marriage) problem, initially posed by Gale and Shapley in 1962. In this problem, preference lists are used to match n men with n women in such a way that no (man, woman) pair exists in which both would prefer each other over their current partners. Such a pair is called a blocking pair, and its existence prevents a matching from being considered stable. In our research, we study three different versions of this problem. First, we consider batch testing of stable marriage solutions. Gusfield and Irving posed an open problem in their 1989 book The Stable Marriage Problem: Structure and Algorithms: whether, given a reasonable amount of preprocessing time, stable matching solutions could be verified in less than O(n^2) time. We answer this question affirmatively, giving an algorithm that verifies k different matchings in O((m + kn) log^2 n) time. Second, we show how the concept of an adaptive algorithm can be used to speed up the running time in cases of the stable marriage problem where the disorder present in the preference lists is limited. While a problem with identical lists can be solved in trivial O(n) time, we present an O(n + k) time algorithm for the case where the women have identical preference lists and the men have preference lists that differ in k positions from a set of identical lists. We also present a visualization program for better understanding the effects of changes in preference lists. Finally, we look at preference-list-based matching as a heuristic for cost-based matching problems. In theory, this method can lead to arbitrarily bad solutions, but through empirical testing on different types of random data sources, we show how to obtain reasonable results in practice by generating preference lists “asymmetrically”, in a way that accounts for the long-term ramifications of short-term decisions. We also discuss several ways to measure the stability of a solution and how these might be used in bicriteria optimization approaches based on both cost and stability.
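    The notion of a blocking pair can be made concrete with the naive O(n^2) stability check that the batch-testing result above improves upon. This is an illustrative sketch with assumed dict-based inputs, not the dissertation's algorithm:

    ```python
    def is_stable(matching, men_pref, women_pref):
        """Naive O(n^2) stability check: scan for a blocking pair,
        i.e. a (man, woman) pair in which both prefer each other
        over their current partners.

        matching:   dict man -> woman (a perfect matching)
        men_pref:   dict man -> list of women, most preferred first
        women_pref: dict woman -> list of men, most preferred first
        """
        # w_rank[w][m] = position of m in w's list (lower = preferred)
        w_rank = {w: {m: i for i, m in enumerate(prefs)}
                  for w, prefs in women_pref.items()}
        partner_of_woman = {w: m for m, w in matching.items()}
        for m, prefs in men_pref.items():
            for w in prefs:
                if w == matching[m]:
                    break  # m prefers no remaining woman over his partner
                # m prefers w to his partner; does w prefer m back?
                if w_rank[w][m] < w_rank[w][partner_of_woman[w]]:
                    return False  # (m, w) is a blocking pair
        return True

    # Both men rank w1 first; both women rank m1 first.
    men_pref = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
    women_pref = {'w1': ['m1', 'm2'], 'w2': ['m1', 'm2']}
    ```

    With these lists, {'m1': 'w1', 'm2': 'w2'} is stable, while {'m1': 'w2', 'm2': 'w1'} is blocked by the pair (m1, w1).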

    IMPACT OF NEW FARM BILL PROVISIONS ON OPTIMAL RESOURCE ALLOCATION ON HIGHLY ERODIBLE SOILS

    The study focuses on incentives to produce crops under reduced-tillage systems on highly erodible soils. A mixed-integer mathematical programming model was developed to identify optimal resource use under alternative farm program provisions. A positive counter-cyclical payment only reinforces the incentive to comply with NRCS soil erosion constraints.

    Mickey Mantle: An American Legend


    A New Approach to Intensity-Dependent Normalization of Two-Channel Microarrays

    A two-channel microarray measures the relative expression levels of thousands of genes from a pair of biological samples. In order to reliably compare gene expression levels between and within arrays, it is necessary to remove systematic errors that distort the biological signal of interest. The standard for accomplishing this is smoothing MA-plots to remove intensity-dependent dye bias and array-specific effects. However, MA methods require strong assumptions. We review these assumptions and derive several practical scenarios in which they fail. The dye-swap normalization method has been much less frequently used because it requires two arrays per pair of samples. We show that a dye-swap is accurate under general assumptions, even under intensity-dependent dye bias, and that a dye-swap provides the minimal information required for removing dye bias from a pair of samples in general. Based on a flexible model of the relationship between mRNA amount and single-channel fluorescence intensity, we demonstrate the general applicability of a dye-swap approach. We then propose a common array dye-swap (CADS) method for the normalization of two-channel microarrays. We show that CADS removes both dye bias and array-specific effects, and preserves the true differential expression signal for every gene. Finally, we discuss some possible extensions of CADS that circumvent the need to use two arrays per pair of samples.
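    The quantities plotted on an MA-plot, and the bias cancellation a dye-swap provides, can be sketched as follows. This is an illustrative toy model with a simple additive dye bias, not the CADS method itself; the function names are our own:

    ```python
    import math

    def ma_values(red, green):
        """MA-plot coordinates for one array:
        M = log2(R/G), the log-ratio between channels;
        A = 0.5 * log2(R*G), the average log-intensity."""
        M = [math.log2(r / g) for r, g in zip(red, green)]
        A = [0.5 * math.log2(r * g) for r, g in zip(red, green)]
        return M, A

    def dye_swap_average(M_forward, M_swap):
        """Dye-swap cancellation (sketch): with the dye assignment
        reversed on the second array, an additive dye bias d enters
        the forward log-ratio as +d and the swapped log-ratio as -d,
        so (M_forward - M_swap) / 2 recovers the true log-ratio."""
        return [(mf - ms) / 2 for mf, ms in zip(M_forward, M_swap)]
    ```

    For example, with a true log-ratio of 1.0 and a dye bias of 0.3, the forward array measures 1.3 and the swapped array measures -0.7; their half-difference restores 1.0.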

    Optimal Feature Selection for Nearest Centroid Classifiers, With Applications to Gene Expression Microarrays

    Nearest centroid classifiers have recently been successfully employed in high-dimensional applications. A necessary step when building a classifier for high-dimensional data is feature selection. Feature selection is typically carried out by computing univariate statistics for each feature individually, without consideration for how a subset of features performs as a whole. For subsets of a given size, we characterize the optimal choice of features, corresponding to those yielding the smallest misclassification rate. Furthermore, we propose an algorithm for estimating this optimal subset in practice. Finally, we investigate the applicability of shrinkage ideas to nearest centroid classifiers. We use gene-expression microarrays for our illustrative examples, demonstrating that our proposed algorithms can improve the performance of a nearest centroid classifier.
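    For reference, a minimal nearest centroid classifier looks like the following. This is the baseline classifier the abstract builds on, shown without any feature selection or shrinkage; the function names are our own:

    ```python
    def train_centroids(X, y):
        """Compute one mean vector (centroid) per class.
        X: list of feature vectors, y: list of class labels."""
        sums, counts = {}, {}
        for xi, yi in zip(X, y):
            counts[yi] = counts.get(yi, 0) + 1
            acc = sums.setdefault(yi, [0.0] * len(xi))
            for j, v in enumerate(xi):
                acc[j] += v
        return {c: [s / counts[c] for s in sums[c]] for c in sums}

    def predict(centroids, x):
        """Assign x to the class whose centroid is nearest
        in Euclidean distance."""
        def dist2(a, b):
            return sum((u - v) ** 2 for u, v in zip(a, b))
        return min(centroids, key=lambda c: dist2(centroids[c], x))
    ```

    In the high-dimensional setting the abstract addresses, the feature vectors would first be restricted to a selected subset of coordinates before the centroids are computed.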

    Normalization of two-channel microarrays accounting for experimental design and intensity-dependent relationships

    eCADS is a new method for multiple-array normalization of two-channel microarrays that takes into account general experimental designs and intensity-dependent relationships, and allows for a more efficient dye-swap design requiring only one array per sample pair.

    Settling the Reward Hypothesis

    The reward hypothesis posits that, "all of what we mean by goals and purposes can be well thought of as maximization of the expected value of the cumulative sum of a received scalar signal (reward)." We aim to fully settle this hypothesis. This will not conclude with a simple affirmation or refutation, but rather specify completely the implicit requirements on goals and purposes under which the hypothesis holds.

    Error, reproducibility and sensitivity: a pipeline for data processing of Agilent oligonucleotide expression arrays

    Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long-oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log2 units (about 6% of the mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identifies an underlying structure which reflects some of the key biological variables that define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
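    The replicate-based SD figures quoted above amount to computing the standard deviation of log2 signals across replicate measurements of each probe. A minimal sketch of that calculation (not the paper's R function):

    ```python
    import math

    def log2_replicate_sd(replicates):
        """Sample standard deviation, in log2 units, of replicate
        measurements of one probe's signal. Values of roughly 0.5
        log2 units, as reported in the text, indicate inter-array
        variability; near-zero values indicate tight replication."""
        logs = [math.log2(v) for v in replicates]
        mean = sum(logs) / len(logs)
        var = sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)
        return math.sqrt(var)
    ```

    Applying this per probe across replicate arrays, and comparing the distribution of SDs against the background signal level, is one way to derive an "expressed / not expressed" threshold of the kind the study estimates.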